
    Efficient Image Stitching through Mobile Offloading

    Image stitching is the task of combining images with overlapping parts into one large image. It requires a sequence of complex computation steps; in particular, executing them on a mobile device can take a long time and consume a lot of energy. Mobile offloading may alleviate these problems, as it aims at improving performance and saving energy when complex applications are executed on mobile devices. In this paper we investigate to what extent mobile offloading can improve the performance and energy efficiency of image stitching on mobile devices. We demonstrate our approach by stitching two or four images, but the process can easily be extended to an arbitrary number of images. We study three methods of offloading parts of the computation to a resourceful server and evaluate them using several metrics. In the first offloading strategy, all contributing images are sent, processed on the server, and the combined image is returned. In the second strategy, the images are offloaded, but not all stitching steps are executed on the remote server; instead, a smaller XML file is returned to the mobile client. This XML file contains the homography information the mobile device needs to perform the last stitching step, the combination of the images. In the third strategy, the images are converted to grey scale before being transmitted to the server, and again an XML file is returned. The metrics considered are the execution time, the size of the data to be transmitted, and the memory usage. We find that the first strategy achieves the lowest total execution time, but it requires more data to be transmitted than either of the other two strategies.
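
    The division of labour in the second and third strategies can be illustrated with a short sketch: the server estimates the homography, and the mobile client merely applies it. The sketch below uses OpenCV and assumes the 3x3 homography matrix has already been parsed from the server's XML response; all function and file names are illustrative, not the paper's implementation.

    ```python
    # Client-side final stitching step (strategies 2 and 3): apply a
    # server-computed homography to combine two images. Illustrative sketch.
    import cv2
    import numpy as np

    def combine_with_homography(img_left, img_right, H):
        """Warp img_right into img_left's frame and paste img_left on top."""
        h, w = img_left.shape[:2]
        # Canvas wide enough to hold both images side by side.
        canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
        canvas[0:h, 0:w] = img_left
        return canvas

    # H would be parsed from the XML file returned by the server.
    H = np.array([[1.0, 0.02, 350.0],
                  [0.0, 1.00,   5.0],
                  [0.0, 0.00,   1.0]])
    left, right = cv2.imread("left.jpg"), cv2.imread("right.jpg")
    # Strategy 3 would transmit grey-scale copies to cut the upload size:
    # grey = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("stitched.jpg", combine_with_homography(left, right, H))
    ```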

    Toward a service-based workflow for automated information extraction from herbarium specimens

    Over the past years, herbarium collections worldwide have started to digitize millions of specimens on an industrial scale. Although the imaging costs are steadily falling, capturing the accompanying label information is still predominantly done manually and is developing into the principal cost factor. In order to streamline the process of capturing herbarium specimen metadata, we specified a formal, extensible workflow integrating a wide range of automated specimen image analysis services. We implemented the workflow on the basis of OpenRefine, together with a plugin for handling service calls and responses. The evolving system presently covers the generation of optical character recognition (OCR) output from specimen images, the identification of regions of interest in images, and the extraction of meaningful information items from the OCR output. These implementations were developed as part of the Deutsche Forschungsgemeinschaft-funded project "A standardised and optimised process for data acquisition from digital images of herbarium specimens" (StanDAP-Herb).
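
    As a rough illustration of two of these steps, OCR generation and information extraction, the sketch below runs OCR over a specimen image and pulls a date-like token out of the raw text. pytesseract merely stands in for the project's OCR service, and the regular expression is an example rather than the workflow's actual rule set.

    ```python
    # Illustrative sketch: OCR a digitized herbarium sheet, then extract
    # one meaningful item (a collection date) from the raw OCR text.
    import re
    import pytesseract
    from PIL import Image

    def ocr_specimen(path):
        """Generate raw OCR text from a specimen image."""
        return pytesseract.image_to_string(Image.open(path))

    def extract_collection_date(ocr_text):
        """Return the first date-like token, e.g. 12.7.1934, if present."""
        match = re.search(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b", ocr_text)
        return match.group(0) if match else None

    text = ocr_specimen("specimen.jpg")  # placeholder file name
    print(extract_collection_date(text))
    ```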

    Using Wikibase as a Platform to Develop a Semantic TDWG Standard

    In the ABCD 3.0 Project, the ABCD (Access to Biological Collection Data) Standard (Access to Biological Collections Data task group 2007) was transformed from a classic XML Schema into an OWL (Web Ontology Language) ontology (alongside an updated semantics-aware XML version). While it was initially planned to use the established TDWG Terms wiki as the editing and development platform for the ABCD ontology, the rise of Wikidata and its underlying platform Wikibase caused us to reconsider this decision and switch to a Wikibase installation instead. This proved to be a crucial decision, as Wikibase turned out to be a well-suited platform for collaboratively importing, developing and exporting this complex semantic standard. This experience is potentially of interest to maintainers of other Biodiversity Information Standards (TDWG) standards and to the Technical Architecture Group. In this presentation we will explain our technical setup and how we used Wikibase, alongside its related tools, to model the ABCD Ontology. We will introduce the tools we used for importing existing concepts from previous ABCD versions, running maintenance queries (e.g., checking the ontology for consistency or for missing information about concepts), and exporting the ontology into the OWL/XML format. Finally, we will discuss the lessons we learned and how our setup can be improved for future uses.
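
    As an illustration of the kind of maintenance query mentioned above, the sketch below asks a Wikibase SPARQL endpoint for concepts that lack an English description. The endpoint URL is a placeholder, and the query relies only on generic vocabulary (rdfs:label, schema:description) that Wikibase query services expose, not on the project's actual property set.

    ```python
    # Illustrative maintenance query: find concepts without an English
    # description on a (placeholder) Wikibase SPARQL endpoint.
    import requests

    ENDPOINT = "https://wikibase.example.org/query/sparql"  # placeholder
    QUERY = """
    SELECT ?concept ?label WHERE {
      ?concept rdfs:label ?label .
      FILTER(LANG(?label) = "en")
      FILTER NOT EXISTS { ?concept schema:description ?d .
                          FILTER(LANG(?d) = "en") }
    }
    LIMIT 100
    """

    resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"})
    for row in resp.json()["results"]["bindings"]:
        print(row["concept"]["value"], "-", row["label"]["value"])
    ```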

    Semantic Annotation of Botanical Collection Data

    Herbarium specimens have been digitized at the Botanical Garden and Botanical Museum, Berlin (BGBM) since the year 2000. As part of the digitization process, specimen data have been recorded manually for specific basic data elements. Additional elements were usually added later based on the digital images. During the last twenty years, data were transcribed exactly as they were written on the labels, a widely used procedure in European herbaria. This approach led to a large number of orthographic variations, especially with regard to person and place names. To improve interoperability between records within our own collection database and across collection databases provided by the community, we have started to enrich our metadata with Linked Open Data (LOD)-based links to semantic resources, starting with collectors and geographic entities. Preferred resources for semantic enrichment (e.g., Wikidata, GeoNames) have been agreed on by members of the Consortium of European Taxonomic Facilities (CETAF) in order to exploit the potential of semantically enriched collection data in the best possible way. To be able to annotate many collection records in a relatively short time, priority was given to concepts (e.g., specific collector names) that occur on many specimen labels and that have an existing and easy-to-find semantic representation in an external resource. With this approach, we were able to annotate 52,000 specimen records in just a few weeks of working time of a student assistant. The integration of our semantic annotation workflows with other data integration, cleaning, and import processes at the BGBM is carried out using an OpenRefine-based platform with specific extensions for services and functions related to label transcription activities (Kirchhoff et al. 2018). Our semantically enriched collection data will contribute to a "Botany Pilot," which is presently being developed by member organizations of CETAF to demonstrate the potential of Linked Open Collection Data and their integration with existing semantic resources.
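
    The candidate lookup behind such an annotation step can be sketched against Wikidata's public entity-search API: given a transcribed collector name, it returns candidate identifiers. Disambiguating among the candidates, and handling the orthographic variants mentioned above, remains a separate manual or rule-based step; the function name below is illustrative.

    ```python
    # Illustrative sketch: fetch candidate Wikidata items for a collector
    # name via the public wbsearchentities API.
    import requests

    def wikidata_candidates(name, limit=5):
        """Return (QID, description) pairs for a person-name search."""
        resp = requests.get(
            "https://www.wikidata.org/w/api.php",
            params={
                "action": "wbsearchentities",
                "search": name,
                "language": "en",
                "format": "json",
                "limit": limit,
            },
        )
        return [(hit["id"], hit.get("description", ""))
                for hit in resp.json()["search"]]

    for qid, desc in wikidata_candidates("Adolf Engler"):
        print(qid, desc)
    ```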

    ABCD 3.0 Ready to Use

    The TDWG standard ABCD (Access to Biological Collections Data task group 2007) is aimed at harmonizing the terminologies used for modelling biological collection information and is used as a comprehensive data format for transferring collection and observation data between software components. The project ABCD 3.0 (A community platform for the development and documentation of the ABCD standard for natural history collections) was financed by the German Research Council (DFG). It addressed the transformation of ABCD into a semantic-web-compliant ontology by deconstructing the XML schema into individually addressable RDF (Resource Description Framework) resources published via the TDWG Terms Wiki (https://terms.tdwg.org/wiki/ABCD_2). In a second step, informal properties and concept relations described by the original ABCD schema were transformed into a machine-readable ontology and revised (Güntsch et al. 2016). The project was successfully finished in January 2019. The ABCD 3 setup allows for the creation of standard-conforming application schemas. The XML variant of ABCD 3.0 was restructured, simplified and made more consistent in terms of element names and types compared to version 2.x. The XML elements are connected to their semantic concepts using the W3C SAWSDL (Semantic Annotations for WSDL and XML Schema) standard. The creation of specialized application schemas is encouraged; the first use case was the application schema for zoology. It will also be possible to generate application schemas that break the traditional unit-centric structure of ABCD. Further achievements of the project include the creation of a Wikibase instance as the editing platform, with related tools for maintenance queries, such as checking for inconsistencies in the ontology, and for automated export into RDF. This allows for fast iterations of new or updated versions, e.g. when additional mappings to other standards are made. The setup is agnostic to the data standard created and can therefore also be used to create or model other standards. Mappings to other standards such as Darwin Core (https://dwc.tdwg.org/) and Audubon Core (https://tdwg.github.io/ac/) are now machine readable as well. All XPaths (XML Paths) of ABCD 3.0 XML have been mapped to all variants of ABCD 2.06 and 2.1, which will ease the transition to the new standard. The ABCD 3 Ontology will also be uploaded to the GFBio Terminology Server (Karam et al. 2016), where individual concepts can be easily searched or queried, allowing for better interactive modelling of ABCD concepts. The ABCD documentation now adheres to TDWG's Standards Documentation Standard (SDS, https://www.tdwg.org/standards/sds/) and is located at https://abcd.tdwg.org/. The new site is hosted on GitHub: https://github.com/tdwg/abcd/tree/gh-pages
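
    Because the SAWSDL annotations make the element-to-concept links machine readable, they can be extracted with a few lines of code. The sketch below reads sawsdl:modelReference attributes from a schema file with lxml; the file name is a placeholder, while the attribute and its namespace come from the W3C SAWSDL recommendation.

    ```python
    # Illustrative sketch: list the semantic concept attached to each
    # element declaration in an ABCD 3.0-style XSD via sawsdl:modelReference.
    from lxml import etree

    XS = "http://www.w3.org/2001/XMLSchema"
    SAWSDL = "http://www.w3.org/ns/sawsdl"

    schema = etree.parse("abcd3.xsd")  # placeholder path to the schema
    for el in schema.iter("{%s}element" % XS):
        ref = el.get("{%s}modelReference" % SAWSDL)
        if ref:
            print(el.get("name"), "->", ref)
    ```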